# Knowledge Distillation Models
## Huihui Ai.deepseek R1 Distill Qwen 32B Abliterated GGUF

Large Language Model · DevQuasar · 572 downloads · 3 likes

A GGUF-quantized build of huihui-ai's DeepSeek-R1-Distill-Qwen-32B-abliterated, released to make the model practical to run on local hardware.
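A minimal sketch of loading a GGUF quant like this one with llama-cpp-python. The repo id and quant filename pattern below are assumptions inferred from the listing; substitute the actual files published by DevQuasar.

```python
from llama_cpp import Llama

# Assumed repo id and quant level; any published .gguf file works.
llm = Llama.from_pretrained(
    repo_id="DevQuasar/huihui-ai.DeepSeek-R1-Distill-Qwen-32B-abliterated-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern matched against repo files
    n_ctx=4096,               # context window
    n_gpu_layers=-1,          # offload all layers to GPU if available
)

out = llm("Explain knowledge distillation in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```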
## Deepseek Ai.deepseek R1 Distill Llama 8B GGUF

Large Language Model · DevQuasar · 320 downloads · 3 likes

DeepSeek-R1-Distill-Llama-8B is an 8B-parameter large language model based on the Llama architecture and optimized through distillation training for text generation; this listing provides GGUF-quantized files for local inference.
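A minimal sketch of chat inference with the 8B quant via llama-cpp-python; the repo id and filename are again assumptions. R1-distilled models typically emit their chain of thought inside `<think>...</think>` tags before the final answer.

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DevQuasar/deepseek-ai.DeepSeek-R1-Distill-Llama-8B-GGUF",  # assumed
    filename="*Q4_K_M.gguf",  # assumed quant level
    n_ctx=8192,               # leave room for long reasoning traces
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    max_tokens=512,
)
# The content usually contains a <think> block followed by the answer.
print(resp["choices"][0]["message"]["content"])
```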
## Semantic Xlmr

Text Embedding · Transformers · headlesstech · 28 downloads · 0 likes

A multilingual sentence-embedding model built on sentence-transformers and specially optimized for Bengali; it maps sentences to dense vectors for semantic-similarity computation and clustering analysis.
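A minimal sketch of scoring Bengali sentence similarity with sentence-transformers. The checkpoint id is an assumption pieced together from the listing (author "headlesstech", model "semantic_xlmr").

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("headlesstech/semantic_xlmr")  # assumed repo id

sentences = [
    "আমি বই পড়তে ভালোবাসি",   # "I love reading books"
    "আমি বই পড়তে পছন্দ করি",  # "I like to read books"
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity near 1.0 indicates near-paraphrases.
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```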
## Distilbert Base Cased Distilled Squad Finetuned Squad

Question Answering System · Transformers · Apache-2.0 · ms12345 · 14 downloads · 0 likes

A fine-tuned version of distilbert-base-cased-distilled-squad for extractive question-answering tasks.
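A minimal sketch of extractive QA with the transformers pipeline. The base checkpoint `distilbert-base-cased-distilled-squad` is used here; the fine-tuned repo id from this listing (presumably under the `ms12345` namespace) can be swapped in the same way.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="distilbert-base-cased-distilled-squad",
)

result = qa(
    question="What does distillation reduce?",
    context="Knowledge distillation trains a small student model to mimic "
            "a large teacher, reducing parameter count and inference cost.",
)
# The pipeline returns the extracted span and a confidence score.
print(result["answer"], result["score"])
```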
## Distilbert Onnx

Question Answering System · Transformers · English · Apache-2.0 · philschmid · 8,650 downloads · 2 likes

An ONNX export of a question-answering model based on DistilBERT-base-cased and fine-tuned on the SQuAD v1.1 dataset using knowledge distillation.
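A minimal sketch of running the ONNX export with onnxruntime. The hub filename, the graph's input names, and the output ordering (start logits, then end logits) are assumptions; check `session.get_inputs()` and `session.get_outputs()` against the actual exported model.

```python
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

model_path = hf_hub_download("philschmid/distilbert-onnx", "model.onnx")  # assumed filename
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")

question = "What was distilled?"
context = "DistilBERT was distilled from BERT and fine-tuned on SQuAD v1.1."
enc = tokenizer(question, context, return_tensors="np")

session = ort.InferenceSession(model_path)
start_logits, end_logits = session.run(
    None,
    {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]},
)

# The answer span runs from the argmax of the start logits to the
# argmax of the end logits (inclusive).
start = int(np.argmax(start_logits))
end = int(np.argmax(end_logits)) + 1
print(tokenizer.decode(enc["input_ids"][0][start:end]))
```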
## Distilgpt2

Large Language Model · English · Apache-2.0 · distilbert · 2.7M downloads · 527 likes

DistilGPT2 is a lightweight distilled version of GPT-2 with 82 million parameters, retaining GPT-2's core text-generation capabilities while being smaller and faster.
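A minimal sketch of text generation with DistilGPT2 through the transformers pipeline; `distilgpt2` is the long-standing hub id for this checkpoint.

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="distilgpt2")
set_seed(42)  # make the sample reproducible

out = generator(
    "Knowledge distillation is",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(out[0]["generated_text"])
```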